Sparse Regularization with ℓq Penalty Term
Abstract
We consider the stable approximation of sparse solutions to non-linear operator equations by means of Tikhonov regularization with a subquadratic penalty term. Under assumptions that, for a linear operator, are equivalent to the standard range condition, we derive the usual convergence rate O(√δ) of the regularized solutions as a function of the noise level δ. Particular emphasis lies on the case where the true solution is known to have a sparse representation in a given basis. In this case, if the differential of the operator satisfies a certain injectivity condition, we show that the convergence rate improves up to O(δ). MSC: 65J20, 65J22, 49N45.
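For orientation, the Tikhonov functional behind this abstract can be written in the following standard form; the notation (operator F, noisy data y^δ, basis (φ_λ), weights w_λ) is a sketch of the usual sparse Tikhonov setup and is not quoted from the paper:

    T_\alpha(x) = \lVert F(x) - y^\delta \rVert^2
                + \alpha \sum_{\lambda \in \Lambda} w_\lambda
                  \lvert \langle x, \phi_\lambda \rangle \rvert^q,
    \qquad 0 < q < 2, \qquad \lVert y - y^\delta \rVert \le \delta.

Here "subquadratic" means q < 2 (sparsity of minimizers is promoted for q ≤ 1), and the rates O(√δ) and O(δ) refer to the error between minimizers x_α^δ of T_α and the true solution x† as δ → 0.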
Similar Resources
ℓp Norm Regularization Algorithm for Image Deconvolution
Up to now, the non-convex ℓp (0 < p < 1) norm regularization function has shown good performance for sparse signal processing. Indeed, it benefits from a significantly heavier-tailed hyper-Laplacian model, which is desirable in the context of image gradient distributions. Both ℓ1/2 and ℓ2/3 regularization methods have been given analytic solutions and fast closed-form thresholding formulae i...
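As a concrete illustration of thresholding-based deconvolution of this kind, here is a minimal proximal-gradient (ISTA) sketch in Python. It uses the ℓ1 soft-thresholding operator as a stand-in prox; the ℓ1/2 and ℓ2/3 penalties mentioned above admit analogous closed-form thresholding rules, which are not reproduced here. All names and parameters are illustrative assumptions:

    import numpy as np

    def soft_threshold(z, t):
        # prox of t * ||.||_1; a stand-in for the closed-form
        # l1/2 / l2/3 thresholding rules cited in the abstract
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def ista_deconvolve(y, kernel, lam, step, n_iter=200):
        # minimize ||k * x - y||^2 + lam * ||x||_1 for circular
        # convolution with kernel k, via proximal gradient descent
        x = np.zeros_like(y)
        K = np.fft.fft(kernel, n=y.size)
        for _ in range(n_iter):
            resid = np.fft.ifft(K * np.fft.fft(x)).real - y
            grad = 2.0 * np.fft.ifft(np.conj(K) * np.fft.fft(resid)).real
            x = soft_threshold(x - step * grad, step * lam)
        return x

A safe step size is step < 1/(2 max|K|^2), the reciprocal of the Lipschitz constant of the data-fit gradient.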
Convergence Rates for Morozov's Discrepancy Principle Using Variational Inequalities
We derive convergence rates for Tikhonov-type regularization with convex penalty terms, where the regularization parameter is chosen according to Morozov’s discrepancy principle and variational inequalities are used to generalize classical source and nonlinearity conditions. Rates are obtained first with respect to the Bregman distance and a Taylor-type distance and those results are combined t...
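A minimal sketch of how a regularization parameter is picked by the discrepancy principle, assuming a user-supplied solver: solve(alpha) returns a Tikhonov minimizer, residual_norm(x) returns ||F(x) − y^δ|| (both hypothetical interfaces), and the residual is monotonically increasing in α:

    import numpy as np

    def morozov_alpha(solve, residual_norm, delta, tau=1.5,
                      alpha_lo=1e-8, alpha_hi=1e2, n_steps=40):
        # bisect on log(alpha) until ||F(x_alpha) - y_delta|| ~ tau * delta
        for _ in range(n_steps):
            alpha = np.sqrt(alpha_lo * alpha_hi)  # geometric midpoint
            if residual_norm(solve(alpha)) > tau * delta:
                alpha_hi = alpha  # residual too large: reduce regularization
            else:
                alpha_lo = alpha  # residual below tau*delta: alpha may grow
        return np.sqrt(alpha_lo * alpha_hi)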
SparseCodePicking: Feature Extraction in Mass Spectrometry Using Sparse Coding Algorithms
Mass spectrometry (MS) is an important technique for chemical profiling that computes, for a sample, a high-dimensional histogram-like spectrum. A crucial step of MS data processing is peak picking, which selects the peaks carrying information about molecules of high concentration that are of interest in an MS investigation. We present a new peak-picking procedure based on a sparse...
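A generic sparse-coding sketch of the idea: encode the spectrum in a dictionary of peak templates with an ℓ1 penalty and report atoms with nonzero coefficients as picked peaks. The dictionary D, the threshold, and the ℓ1 model are illustrative assumptions, not the paper's exact procedure:

    import numpy as np

    def pick_peaks(spectrum, D, lam, step, n_iter=300, tol=1e-8):
        # sparse-code the spectrum: min ||D c - s||^2 + lam * ||c||_1,
        # then return indices of dictionary atoms with nonzero weight
        c = np.zeros(D.shape[1])
        for _ in range(n_iter):
            z = c - step * 2.0 * (D.T @ (D @ c - spectrum))
            c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        return np.flatnonzero(np.abs(c) > tol)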
Convergence of Online Gradient Method for Feedforward Neural Networks with Smoothing L1/2 Regularization Penalty
Minimization of the training regularization term has been recognized as an important objective for sparse modeling and generalization in feedforward neural networks. Most studies so far have focused on the popular L2 regularization penalty. In this paper, we consider the convergence of the online gradient method with a smoothing L1/2 regularization term. For normal L1/2 regularization, th...
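A minimal sketch of one such update for a single linear unit, assuming a squared-error loss and the smooth surrogate (w² + ε)^{1/4} for |w|^{1/2}; the network, loss, and smoothing choice here are assumptions, not the paper's exact construction:

    import numpy as np

    def smoothed_half_grad(w, eps=1e-4):
        # gradient of sum_i (w_i^2 + eps)^(1/4), a smooth surrogate
        # for the L1/2 penalty sum_i |w_i|^(1/2) (differentiable at 0)
        return 0.5 * w * (w * w + eps) ** (-0.75)

    def online_gradient_step(w, x, target, lr, lam):
        # one stochastic update on a single sample (x, target) for a
        # linear unit with loss 0.5*(w.x - target)^2 + lam * penalty
        err = w @ x - target
        return w - lr * (err * x + lam * smoothed_half_grad(w))

The smoothing removes the singularity of |w|^{1/2} at w = 0, which is what makes a standard convergence analysis of the online gradient method possible.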
Non-convex Sparse Regularization
We study the regularising properties of Tikhonov regularisation on the sequence space ℓ2 with a weighted, non-quadratic penalty term acting separately on the coefficients of a given sequence. We derive sufficient conditions on the penalty term that guarantee the well-posedness of the method, and investigate to what extent the same conditions are also necessary. A particular interest of this pape...
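In symbols, the abstract's setting is a separable penalty on ℓ2; the display below and the listed conditions are a sketch of the usual requirements, not quotations from the paper:

    T(x) = \lVert F(x) - y \rVert^2 + \sum_{\lambda} \phi_\lambda(\lvert x_\lambda \rvert),
    \qquad x = (x_\lambda) \in \ell^2,

where well-posedness is typically obtained when each φ_λ is non-negative and lower semicontinuous and the family grows fast enough that the penalty is coercive and weakly lower semicontinuous on ℓ2.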
Published: 2008